Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Prof. Jyoti Madake, Achal Ninawe, Sachi Nagdeve, Nishant Wankhede, Vinod Panzade, Prof. Dr. Shripad Bhatlawande
DOI Link: https://doi.org/10.22214/ijraset.2023.52352
Zebra-crossing recognition is a challenging task for the safe navigation of visually impaired people. This paper proposes a zebra-crossing scene recognition model based on the fusion of texture and geometric features. Gabor features are extracted at 8 orientations to capture the differently oriented zebra-crossing lines, and SIFT is employed for keypoint detection and description. The final feature vector is obtained by concatenating the Gabor and SIFT features and is optimized using K-means clustering and Principal Component Analysis (PCA). The optimized feature vectors were classified using four classifiers: SVM, Random Forest, Decision Tree, and Naïve Bayes. The proposed model recognizes crosswalks with 89% accuracy, and the classifier models are evaluated for Precision, Accuracy, F1 Score, Recall, and ROC. The paper addresses a two-class problem: positive (zebra-crossing) and negative (no zebra-crossing). The designed system decides whether a crosswalk is detected on the basis of the match percentage computed for each class, obtained by matching the features of the trained dataset against real videos or images to recognize the scene and assign it to a specific class. More specific real-time crosswalk detection could be carried out to obtain finer-grained positive and negative classes.
I. INTRODUCTION
Zebra crossing detection has been extensively researched in several disciplines. It is used in advanced driver assistance systems to locate zebra crossings, and it can assist blind people in crossing the street on their own. The zebra crossing is an important traffic marking that indicates a safe place to cross the street. For safe travel, the visually impaired need access to the accessible area of the zebra crossing, and they also need its direction in order to adjust their course of movement. In prior work, crossing photos are classified as crosswalks using multiple methods, and the extracted features are matched using the ORB algorithm [1], with the goal of creating an image-based tool that enables blind people to recognize, on their own, the information needed to navigate a crossing safely. According to figures from the World Health Organization, there are about 40 million blind persons in the world, and it is quite challenging for them to carry out any kind of action on their own.
Zebra crossings are one of the most important elements of the road system and of driving safety; they exist to safeguard both vehicles and pedestrians, but they remain difficult for those who are blind. Discovering crosswalks, figuring out when the walk interval begins, keeping a straight line while crossing, and finishing crossings before opposing traffic starts are all challenges for blind persons. Many people also continue to lose their lives while crossing the road. Given the challenges blind individuals face while crossing the street and the importance of zebra crossings in ensuring their safety, a computer vision-based approach to zebra crossing detection is proposed. The system uses a dataset of zebra crossing images for training and a camera module to collect real-time data as the test set. Various feature extraction and matching algorithms are used to describe the zebra crossings, and machine learning algorithms such as SVM, Random Forest, Decision Tree, and Naïve Bayes are used for classification. This approach can enable blind individuals to navigate zebra crossings safely and independently, improving their quality of life.
To cross a road safely, a blind person needs information regarding the position of the crossing, whether it lies in his frontal area, an estimate of its length, and the status of the traffic lights. Typically, blind individuals use a white cane as a travel aid, but a cane has a relatively limited range for detecting distinctive patterns or obstacles. Certain traffic signals contain beepers that tell the blind person when it is safe to cross the road; however, such technology is not always present at crossings, and it would take too much time to install and maintain such equipment at every crossing [2].
Blind people cannot see, but they can hear. New opportunities to create an intelligent navigation system for blind people have opened up with the introduction of fast, inexpensive portable computers with multimedia capabilities that can process audio-video streams in real time. This paper presents a computer vision-based approach for zebra crossing detection [3]. The system captures real-time data for the detection process: the training dataset consists of zebra crossing images, while a camera module collects real-time data that serves as the test dataset. Features are then matched using algorithms such as SIFT and ORB, and after matching, classification is performed using SVM, Random Forest, Decision Tree, and Naïve Bayes to finally detect the crosswalk.
II. LITERATURE REVIEW
Zebra-crossings have been detected by looking for groups of concurrent lines [1]. Three methods for color detection and segmentation, including converting RGB images into the IHLS color space, have been tested on outdoor images [2]. Thresholding techniques such as Gaussian filtering, Canny edge detection, contour extraction, and ellipse fitting [3][4] have been used for traffic sign recognition together with a Kalman filter [5], as have the block-based Hough transform proposed by Yu-Qing Bao [6] and directional variance techniques [7]. A novel approach to detect and locate zebra-crossings was found to be feasible for use on public roads around the world [8], and 13,40 high-quality photo-realistic images were obtained from video, covering 13 classes of various objects [9]. The design of a low-power, low-latency electronic mobility aid for blind persons showed that decision trees, random forests, and KNNs can all be used to recognize objects [10]. A pedestrian walk detection system combined the HOG and LBPH methods with the SVM algorithm [11]; another approach segments pixels by color, classifies shape using a linear SVM, and performs content recognition using Gaussian-kernel SVMs [12]. GMMs have been used to model video [13], and a YOLOv3-based approach on real-time data applied a kinematics-based filter [14] for vehicle detection, while SVM and HOG [15] methods were used to obtain high accuracy for vehicle speed [16]. Several application-oriented systems have also been developed, among them an electronic travel aid for navigation of visually impaired persons [17], a means through which a blind person can autonomously navigate a new environment.
Lane detection is another relevant task: ROC and DET analyses have been used to tune Canny edge detection, and the flood-fill Canny edge detector [18] and Sobel operators [19] have been applied, with local image orientations and line segments calculated from the results [20]; a Gabor filter is sometimes used to reduce noise [21]. Motion detection [22], used by Zhang together with HOG for edge detection, is another important component that relies on the Gaussian Mixture Model (GMM), where data processed through HCC reached an accuracy of 91.5% at 720p resolution [23]. A CNN can also be used and gives good accuracy provided the image resolution is good, the white crossing markings are intact, and no vehicles obstruct the view [24].
An LDA grayscale algorithm for image processing combined with the EDLines algorithm for straight-edge detection [25] helps achieve an accuracy of 90.3%. Real-time traffic sign recognition can be performed using CNN algorithms [26], and the RNN method has helped design a unified framework for classifying images [27]. The ZebraRecognizer software demonstrated by Mascetti et al. computes the crossing position precisely, with accurate and efficient results [28]. Several other algorithms can serve this detection task, such as the zebra-crossing detection CBR module proposed by Qujiang Lei [29], the K-means clustering approach for classifying images [30], whose accuracy is good, and the RDP and shape-arc algorithms [31] used to identify pre-processed images. The Viola-Jones method [32] segments road signs based on the HSV color space and fuzzy logic [33], and related work describes how the cascaded classifier operates and how integral images are used [34]. The AdaBoost method has been applied to a bird's-eye-view (inverse perspective mapping) image for zebra-crossing detection [35]. A practical camera auto-calibration technique suggested by Zhang et al. [36] for traffic scene surveillance can also be considered for further implementation, together with the Fourier transform [37], and convolutional neural networks [38] can play a vital role in creating a dataset usable for zebra-crossing detection [39]. Gaussian process dynamical models and probabilistic hierarchical trajectory matching techniques have been used to study pedestrian path prediction, and a further step could be a monocular visual odometry (VO) system such as UnDeepVO, focusing on Gaussian process dynamical models (GPDMs) for nonlinear time series analysis [40].
III. METHODOLOGY
This system follows a procedure that starts from the input dataset and feature matching of crosswalks, and then proceeds with image pre-processing. In pre-processing, we perform image resizing and augmentation. We then perform feature extraction using the Gabor filter and the SIFT algorithm. Feature matching is subsequently performed, classifiers are applied, and the individual accuracy of each classifier is reported.
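A minimal sketch of this pre-processing stage, assuming OpenCV; the 135×135 target size follows the dataset description given later in the paper, while the folder layout and the flip-based augmentation are illustrative assumptions rather than details reported by the authors.

```python
# Pre-processing sketch: greyscale conversion, resizing to 135x135, and a
# simple flip-based augmentation. Folder names are hypothetical.
import os
import cv2

def preprocess_image(path, size=(135, 135)):
    img = cv2.imread(path)                        # read BGR image from disk
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to greyscale
    return cv2.resize(gray, size)                 # normalise pixel dimensions

def augment(img):
    # horizontal flip as one simple augmentation example
    return [img, cv2.flip(img, 1)]

positives = []
for fname in os.listdir("dataset/positive"):      # hypothetical folder layout
    img = preprocess_image(os.path.join("dataset/positive", fname))
    positives.extend(augment(img))
```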
The system block diagram above explains the overall workflow of the system: input data is taken, each image is converted to greyscale, and the pixel values of every image in the dataset are adjusted. Feature extraction, an important step before classification, is then performed using the ORB and SIFT algorithms, and the number of features matched with each image is displayed on the screen. A real-time image is captured using the video capture function and tested against the trained dataset: the training set contains all zebra crossing images, features are extracted from the test image, and feature matching is performed between the test image and every training image. The count of matched features is shown on the console. Classification is then performed using multiple algorithms.
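The feature-matching step described above could be sketched as follows, assuming OpenCV's SIFT implementation and a brute-force matcher; the ratio-test threshold, file path, and camera index are illustrative assumptions (an ORB variant would use cv2.ORB_create with a Hamming-norm matcher instead).

```python
# Match a real-time frame against a training image and report the number of
# matched SIFT keypoints, as described in the text above.
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher(cv2.NORM_L2)

def count_matches(test_img, train_img, ratio=0.75):
    _, des1 = sift.detectAndCompute(test_img, None)
    _, des2 = sift.detectAndCompute(train_img, None)
    if des1 is None or des2 is None:
        return 0
    matches = bf.knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)

cap = cv2.VideoCapture(0)                          # capture one real-time frame
ok, frame = cap.read()
cap.release()
if ok:
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    train = cv2.imread("dataset/positive/crosswalk_01.jpg", cv2.IMREAD_GRAYSCALE)
    print("matched features:", count_matches(frame_gray, train))
```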
Fig. 5 demonstrates the model's overall working process. Work begins with pre-processing of both the positive and negative datasets. The data is then passed through the Gabor filter, which is employed to detect edges, and the SIFT descriptor is applied to the Gabor filter's output. After that, k-means clustering is used; in essence, k-means locates cluster centres and groups the input samples around them. Finally, classification is carried out with four classifiers: SVM, Random Forest, Decision Tree, and Naïve Bayes. Among these, SVM provides the highest accuracy.
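A minimal sketch of the Fig. 5 pipeline, assuming OpenCV and scikit-learn; the Gabor kernel parameters, the number of k-means visual words, and the PCA dimensionality are assumptions chosen for illustration, not values reported in the paper.

```python
# Feature pipeline sketch: Gabor filter bank at 8 orientations -> SIFT
# descriptors on each filtered response -> k-means bag-of-visual-words
# histogram -> PCA reduction of the final feature vector.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

ORIENTATIONS = [i * np.pi / 8 for i in range(8)]               # 8 Gabor orientations
KERNELS = [cv2.getGaborKernel((21, 21), 4.0, t, 10.0, 0.5, 0)  # illustrative parameters
           for t in ORIENTATIONS]
sift = cv2.SIFT_create()

def gabor_sift_descriptors(gray):
    """Apply the Gabor bank, then collect SIFT descriptors from each response."""
    descs = []
    for k in KERNELS:
        response = cv2.filter2D(gray, cv2.CV_8U, k)
        _, d = sift.detectAndCompute(response, None)
        if d is not None:
            descs.append(d)
    return np.vstack(descs) if descs else np.empty((0, 128), dtype=np.float32)

def build_feature_vectors(images, n_words=64, n_components=32):
    """Bag-of-visual-words histograms compressed with PCA
    (n_components must not exceed the number of images)."""
    per_image = [gabor_sift_descriptors(img) for img in images]
    all_desc = np.vstack([d for d in per_image if len(d)])
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(all_desc)
    hists = np.array([np.bincount(kmeans.predict(d), minlength=n_words)
                      if len(d) else np.zeros(n_words) for d in per_image])
    return PCA(n_components=n_components).fit_transform(hists)
```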
D. Classification
Once feature matching is performed, various classifiers, including KNN, Random Forest, SVM, and Decision Tree, are applied to the test image to classify the actions. Classification is performed such that the matched features yield a match percentage with the training set for each class, and the class with the highest match percentage is the detected class for the given test image.
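A minimal sketch of this classification stage, assuming scikit-learn and the feature vectors produced by the pipeline above; the per-class match percentage is realised here through predicted class probabilities, which is one way to implement the percentage-based decision described in the text. Hyperparameters are library defaults, not values from the paper.

```python
# Train the classifiers named above and report per-class "match percentages"
# for a single test vector via predicted probabilities.
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def evaluate_classifiers(X, y):
    """Train each classifier and print its accuracy on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    models = {
        "SVM": SVC(kernel="rbf", probability=True),
        "Random Forest": RandomForestClassifier(),
        "Decision Tree": DecisionTreeClassifier(),
        "Naive Bayes": GaussianNB(),
        "KNN": KNeighborsClassifier(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name}: test accuracy = {model.score(X_te, y_te):.2%}")
    return models

def match_percentages(model, feature_vector):
    """Per-class match percentage for one test feature vector."""
    probs = model.predict_proba(feature_vector.reshape(1, -1))[0]
    return {cls: p * 100 for cls, p in zip(model.classes_, probs)}
```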
Some important results are calculated using the following formulas:
Precision = TruePositives / (TruePositives + FalsePositives)
Recall = TruePositives / (TruePositives + FalseNegatives)
F-Measure = (2 * Precision * Recall) / (Precision + Recall)
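These metrics can be computed directly from the confusion counts or with scikit-learn; the sketch below uses small hypothetical label arrays purely for illustration, with the positive class denoting zebra-crossing.

```python
# Compute precision, recall, F1, and ROC AUC for the binary task
# (1 = zebra-crossing, 0 = no zebra-crossing). Labels here are hypothetical.
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true = [1, 1, 0, 1, 0, 0, 1, 0]   # ground-truth labels (hypothetical)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # classifier predictions (hypothetical)

print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1 score :", f1_score(y_true, y_pred))         # 2PR / (P + R)
print("ROC AUC  :", roc_auc_score(y_true, y_pred))
```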
Fig. 6 displays the different evaluation factors for each classifier, which help identify the better choice of classification algorithm for separating the positive (zebra-crossing) and negative (no zebra-crossing) classes.
The paper presents a machine learning and computer vision-based approach for zebra crossing detection. Positive and negative are the two classes being classified, where positive denotes the zebra-crossing class and negative the no-zebra-crossing class. Multiple algorithms have been used, namely SVM, Random Forest, Decision Tree, and Naïve Bayes, which give accuracies of 93.00%, 75.00%, 50.00%, and 50.00%, respectively. The dataset consists of 200 images of 135×135 pixels that are split into training and testing sets of crosswalk images. For feature extraction, the SIFT algorithm is utilized; test images are matched against the training features, the count of features matched between each training image and the real-time test image is displayed, and the resulting match percentage for each class determines the final class. The best accuracy is shown by the SVM classifier, which is therefore the best classifier for this system to detect the target class. A customized dataset was used for training, and the testing images are real-time images. The system was trained with different feature extraction techniques, and feature matching with the same techniques was used to detect crosswalks; the number of lines matched between the training images and the real-time test images indicates the accuracy. After this comparison, SIFT was found to be better for matching.
REFERENCES
[1] Fleyeh, "Road and Traffic Sign Color Detection and Segmentation in Poor Light Conditions", IAPR Conference on Machine Vision Applications (MVA2005), Tsukuba, Japan, 2005.
[2] Lorsakul, Auranuch, and Jackrit Suthakorn, "Traffic sign recognition using a neural network on OpenCV: Toward intelligent vehicle/driver assistance system", 4th International Conference on Ubiquitous Robots and Ambient Intelligence, 2007.
[3] Fleyeh, Hasan, "Color detection and segmentation for road and traffic signs", IEEE Conference on Cybernetics and Intelligent Systems, vol. 2, pp. 809-814, IEEE, 2004.
[4] Se, Stephen, "Zebra-crossing detection for the partially sighted", Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), vol. 2, pp. 211-217, IEEE, 2000.
[5] Vishwakarma, Sunil Kumar, and Divakar Singh Yadav, "Analysis of lane detection techniques using OpenCV", 2015 Annual IEEE India Conference (INDICON), pp. 1-4, IEEE, 2015.
[6] Jyoti Madake, Mahesh Badade, Mrunal Barve, Shripad Bhatlawande, and Swati Shilaskar, "A Real-Time Detection of Indian Traffic Signs for Visually Impaired People", Intelligent Systems and Applications, ICISA (LNEE, volume 959), 2022.
[7] Soendoro, David, and Iping Supriana, "Traffic sign recognition with Color-based Method, shape-arc estimation and SVM", Proceedings of the 2011 International Conference on Electrical Engineering and Informatics, pp. 1-6, IEEE, 2011.
[8] Zhang, Yunzuo, Kaina Guo, Wei Guo, Jiayu Zhang, and Yi Li, "Pedestrian Crossing Detection Based on HOG and SVM", Journal of Cybersecurity 3, no. 2, 2021.
[9] Zhong, Jiahao, Wei Feng, Qujiang Lei, Shangzhi Le, Xiangying Wei, Yuhe Wang, and Weijun Wang, "Improved U-net for zebra-crossing image segmentation", IEEE 6th International Conference on Computer and Communications (ICCC), pp. 388-393, IEEE, 2020.
[10] Wu, Xue-Hua, Renjie Hu, and Yu-Qing Bao, "Block-based Hough transform for recognition of zebra crossing in natural scene images", IEEE Access 7, 2019.
[11] Anggadhita, Mahada Panji, and Yuni Widiastiwi, "Breaches Detection in Zebra Cross Traffic Light Using Haar Cascade Classifier", 2020 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS), pp. 272-277, IEEE, 2020.
[12] Zhang, Zhaoxiang, Min Li, Kaiqi Huang, and Tieniu Tan, "Practical camera auto-calibration based on object appearance and motion for traffic scene visual surveillance", 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, IEEE, 2008.
[13] Sichelschmidt, Sebastian, Anselm Haselhoff, Anton Kummert, Martin Roehder, Björn Elias, and Karsten Berns, "Pedestrian crossing detection as a part of an urban pedestrian safety system", 2010 IEEE Intelligent Vehicles Symposium, pp. 840-844, IEEE, 2010.
[14] Uddin, Mohammad Shorif, and Tadayoshi Shioyama, "Detection of pedestrian crossing using bipolarity feature - an image-based technique", IEEE Transactions on Intelligent Transportation Systems 6, no. 4 (2005): 439-445.
[15] Khaliluzzaman, Md, and Kaushik Deb, "Zebra-crossing detection based on geometric features and vertical vanishing point", 2016 3rd International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), pp. 1-6, IEEE, 2016.
[16] Yu, Samuel, Heon Lee, and John Kim, "LytNet: A convolutional neural network for real-time pedestrian traffic lights and zebra crossing recognition for the visually impaired", International Conference on Computer Analysis of Images and Patterns, pp. 259-270, Springer, Cham, 2019.
[17] Shripad Bhatlawande, Neel Gokhale, Dewang V. Mehta, Parag Gaikwad, Swati Shilaskar, and Jyoti Madake, "Electronic Travel Aid for Crosswalk Detection for Visually Challenged People", Intelligent Systems and Applications, ICISA (LNEE, volume 959), 2022.
[18] Meem, Mahinul Islam, Pranab Kumar Dhar, Md Khaliluzzaman, and Tetsuya Shimamura, "Zebra-Crossing Detection and Recognition Based on Flood Fill Operation and Uniform Local Binary Pattern", 2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), pp. 1-6, IEEE, 2019.
[19] Uddin, Mohammad Shorif, and Tadayoshi Shioyama, "Detection of pedestrian crossing and measurement of crossing length - an image-based navigational aid for blind people", Proceedings 2005 IEEE Intelligent Transportation Systems, pp. 331-336, IEEE, 2005.
[20] Pandey, Pranjali Susheel Kumar, and Ramesh Kulkarni, "Traffic sign detection for advanced driver assistance system", 2018 International Conference on Advances in Communication and Computing Technology (ICACCT), pp. 182-185, IEEE, 2018.
[21] Mascetti, Sergio, Dragan Ahmetovic, Andrea Gerino, and Cristian Bernareggi, "ZebraRecognizer: Pedestrian crossing recognition for people with visual impairment or blindness", Pattern Recognition 60 (2016): 405-419.
[22] Wang, Chao, Chunxia Zhao, and Huan Wang, "Self-Similarity Based Zebra-Crossing Detection for Intelligent Vehicles", The Open Automation and Control Systems Journal 7, no. 1 (2015).
[23] Selvi, C. Thirumarai, and J. Amudha, "Automatic video surveillance system for pedestrian crossing using digital image processing", Indian Journal of Science and Technology 12, no. 2 (2019): 1-6.
[24] Keller, Christoph G., and Dariu M. Gavrila, "Will the pedestrian cross? A study on pedestrian path prediction", IEEE Transactions on Intelligent Transportation Systems 15, no. 2 (2013): 494-506.
[25] Terwilliger, Jack, Michael Glazer, Henri Schmidt, Josh Domeyer, Heishiro Toyoda, Bruce Mehler, Bryan Reimer, and Lex Fridman, "Dynamics of pedestrian crossing decisions based on vehicle trajectories in large-scale simulated and real-world data", arXiv preprint arXiv:1904.04202 (2019).
[26] Sheik Farid, Abbas, Farshidreza Haghighi, Sarah Bakhtiari, and Luigi Pariota, "Safety Margin Evaluation of Pedestrian Crossing through Critical Thresholds of Surrogate Measures of Safety: Area with Zebra Crossing versus Area without Zebra Crossing", Transportation Research Record (2022).
[27] Yang, Yi, Hengliang Luo, Huarong Xu, and Fuchao Wu, "Towards real-time traffic sign detection and classification", IEEE Transactions on Intelligent Transportation Systems 17, no. 7 (2015): 2022-2031.
[28] Lausser, Ludwig, Friedhelm Schwenker, and Günther Palm, "Detecting zebra crossings utilizing AdaBoost", ESANN, pp. 535-540, 2008.
[29] Fang, Nan, Zhiyong Zhang, Bingcan Xia, and Zichen Yao, "Polite Zebra Crossing Driver Reminding System Design", Proceedings of the 2021 International Conference on Bioinformatics and Intelligent Computing, pp. 390-394, 2021.
[30] Zhang, Zhaoxiang, Min Li, Kaiqi Huang, and Tieniu Tan, "Practical camera auto-calibration based on object appearance and motion for traffic scene visual surveillance", 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, IEEE, 2008.
[31] Mathias, Markus, Radu Timofte, Rodrigo Benenson, and Luc Van Gool, "Traffic sign recognition - How far are we from the solution?", The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1-8, IEEE, 2013.
[32] Anuradha B. and Ramesh Babu N., "A Novel Approach for Detection and Location of Zebra Crossing", Machine Vision and Applications, vol. 14, pp. 157-165, 2003.
[33] Dutta, Ayushi, "Blending the Past and Present of Automatic Image Annotation", PhD diss., International Institute of Information Technology, Hyderabad, 2019.
[34] Akbari, Younes, Hanadi Hassen, Nandhini Subramanian, Jayakanth Kunhoth, Somaya Al-Maadeed, and Wael Alhajyaseen, "A vision-based zebra crossing detection method for people with visual impairments", 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), pp. 118-123, IEEE, 2020.
[35] Wang, Canyong, "Research and application of traffic sign detection and recognition based on deep learning", 2018 International Conference on Robots & Intelligent System (ICRIS), pp. 150-152, IEEE, 2018.
[36] Maldonado-Bascón, Saturnino, Sergio Lafuente-Arroyo, Pedro Gil-Jimenez, Hilario Gómez-Moreno, and Francisco López-Ferreras, "Road-sign detection and recognition based on support vector machines", IEEE Transactions on Intelligent Transportation Systems 8, no. 2 (2007): 264-278.
[37] Mogelmose, Andreas, Mohan Manubhai Trivedi, and Thomas B. Moeslund, "Vision-based traffic sign detection and analysis for intelligent driver assistance systems: Perspectives and survey", IEEE Transactions on Intelligent Transportation Systems 13, no. 4 (2012): 1484-1497.
[38] Fang, Chiung-Yao, Sei-Wang Chen, and Chiou-Shann Fuh, "Road-sign detection and tracking", IEEE Transactions on Vehicular Technology 52, no. 5 (2003): 1329-1341.
[39] Bucher, Thomas, Cristobal Curio, Johann Edelbrunner, Christian Igel, David Kastrup, Iris Leefken, Gesa Lorenz, Axel Steinhage, and Werner von Seelen, "Image processing and behavior planning for intelligent vehicles", IEEE Transactions on Industrial Electronics 50, no. 1 (2003): 62-75.
[40] Wang, Jack M., David J. Fleet, and Aaron Hertzmann, "Gaussian process dynamical models for human motion", IEEE Transactions on Pattern Analysis and Machine Intelligence 30, no. 2 (2007): 283-298.
Copyright © 2023 Prof. Jyoti Madake, Achal Ninawe, Sachi Nagdeve, Nishant Wankhede, Vinod Panzade, Prof. Dr. Shripad Bhatlawande. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET52352
Publish Date : 2023-05-16
ISSN : 2321-9653
Publisher Name : IJRASET